Re: [GENERAL] slow inserts and updates on large tables - Mailing list pgsql-general

From Herouth Maoz
Subject Re: [GENERAL] slow inserts and updates on large tables
Date
Msg-id l03110701b2f0813254e6@[147.233.159.109]
In response to Re: [GENERAL] slow inserts and updates on large tables  (jim@reptiles.org (Jim Mercer))
Responses Re: [GENERAL] slow inserts and updates on large tables
List pgsql-general
At 16:10 +0200 on 17/2/99, Jim Mercer wrote:


>
> > 3) Back to the issue of INSERTS - copies are faster. If you can transform
> >    the data into tab-delimited format as required by COPY, you save a lot
> >    of time on parsing, planning etc.
>
> this sorta defeats the purpose of putting the data in an SQL database. 8^)

You probably didn't understand me. If you convert the data to tab-delimited
text and then use COPY table_name FROM filename/stdin instead of INSERT, it
will be much faster, because the parsing and planning are done once for the
whole COPY instead of once per line.

I didn't tell you to use the data directly from those text files...
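
By the way, if the data does happen to be sitting in a tab-delimited file on
the server already, the file form of COPY loads it in a single statement. The
path here is just an example, and note that the backend itself reads the
file, so it has to be readable on the server side:

- - - -

COPY table1 FROM '/tmp/table1.data';

- - - -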

In fact, it doesn't require using text files at all, just a little
restructuring of your program. If until now it did something like this:

- - - -

while (data_still_coming) {
    sprintf( command, "INSERT INTO table1 VALUES( %s, %s, %s )",
             item1, item2, item3 );
    PQexec( con, command );
}

- - - -

Now it would do this instead:

- - - -

PQexec( con, "COPY table1 FROM stdin" );

while (data_still_coming) {
    sprintf( line, "%s\t%s\t%s\n", item1, item2, item3 );
    PQputline( con, line );
}

PQputline( con, "\\.\n" );   /* terminator: a line containing backslash-period */
PQendcopy( con );

- - - -

It's simply a different way of formatting your data insertion.
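
In case it helps, here is a slightly fuller sketch of the same technique,
with the connection and COPY status checks filled in. The database name,
table name and the sample values are only placeholders, so adjust them to
your own schema:

- - - -

#include <stdio.h>
#include <libpq-fe.h>

int
main(void)
{
    /* placeholder connection string - change it for your setup */
    PGconn   *con = PQconnectdb("dbname=test");
    PGresult *res;
    char      line[1024];

    if (PQstatus(con) != CONNECTION_OK) {
        fprintf(stderr, "connection failed: %s", PQerrorMessage(con));
        PQfinish(con);
        return 1;
    }

    /* put the backend into COPY IN mode */
    res = PQexec(con, "COPY table1 FROM stdin");
    if (PQresultStatus(res) != PGRES_COPY_IN) {
        fprintf(stderr, "COPY failed: %s", PQerrorMessage(con));
        PQclear(res);
        PQfinish(con);
        return 1;
    }
    PQclear(res);

    /* one tab-delimited line per row; real code would loop over its data */
    sprintf(line, "%d\t%s\t%s\n", 1, "first item", "1999-02-17");
    PQputline(con, line);
    sprintf(line, "%d\t%s\t%s\n", 2, "second item", "1999-02-18");
    PQputline(con, line);

    /* backslash-period line ends the data, then finish the copy */
    PQputline(con, "\\.\n");
    if (PQendcopy(con) != 0)
        fprintf(stderr, "copy did not complete: %s", PQerrorMessage(con));

    PQfinish(con);
    return 0;
}

- - - -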

Herouth


